Results 1 - 14 of 14
1.
JASA Express Lett; 2(11): 114401, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36456369

ABSTRACT

The potential binaural consequences of two envelope-based speech enhancement strategies (broadband compression and expansion) were examined. Sensitivity to interaural time differences imposed on four single-word stimuli was measured in listeners with normal hearing and sensorineural hearing loss. While there were no consistent effects of compression or expansion across all words, some potentially interesting word-specific effects were observed.


Subjects
Data Compression, Sensorineural Hearing Loss, Refractive Surgical Procedures, Humans, Speech
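Both strategies above operate on the broadband temporal envelope of the signal. As a rough illustration of the idea, the Python sketch below extracts a Hilbert envelope, raises it to an exponent (below 1 for compression, above 1 for expansion), and reimposes it on the fine structure. The Hilbert-based decomposition, the exponent values, and the synthetic token are illustrative assumptions, not the processing used in the study.

import numpy as np
from scipy.signal import hilbert

def envelope_modify(x, exponent):
    """Broadband envelope compression (exponent < 1) or expansion (exponent > 1).

    Envelope and fine structure are taken from the analytic signal, which is
    only one of several possible decompositions (an assumption, not the
    study's exact processing).
    """
    analytic = hilbert(x)
    env = np.abs(analytic)                  # temporal envelope
    tfs = np.cos(np.angle(analytic))        # temporal fine structure
    y = (env ** exponent) * tfs             # compress or expand, then reimpose
    return y / (np.max(np.abs(y)) + 1e-12)  # renormalize

# Example on a synthetic word-length token (a real word recording would be used instead).
fs = 16000
t = np.arange(int(0.5 * fs)) / fs
token = np.sin(2 * np.pi * 500 * t) * np.hanning(t.size)
compressed = envelope_modify(token, exponent=0.5)
expanded = envelope_modify(token, exponent=2.0)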
2.
Trends Hear; 26: 23312165221095357, 2022.
Article in English | MEDLINE | ID: mdl-35754372

ABSTRACT

While many studies have reported a loss of sensitivity to interaural time differences (ITDs) carried in the fine structure of low-frequency signals for listeners with hearing loss, relatively few data are available on the perception of ITDs carried in the envelope of high-frequency signals in this population. The relevant studies found stronger effects of hearing loss at high frequencies than at low frequencies in most cases, but small subject numbers and several confounding effects prevented strong conclusions from being drawn. In the present study, we revisited this question while addressing some of the issues identified in previous studies. Participants were ten young adults with normal hearing (NH) and twenty adults with sensorineural hearing impairment (HI) spanning a range of ages. ITD discrimination thresholds were measured for octave-band-wide "rustle" stimuli centered at 500 Hz or 4000 Hz, which were presented at 20 or 40 dB sensation level. Broadband rustle stimuli and 500-Hz pure-tone stimuli were also tested. Thresholds were poorer on average for the HI group than the NH group. The ITD deficit, relative to the NH group, was similar at low and high frequencies for most HI participants. For a small number of participants, however, the deficit was strongly frequency-dependent. These results provide new insights into the binaural perception of complex sounds and may inform binaural models that incorporate effects of hearing loss.


Subjects
Deafness, Hearing Loss, Acoustic Stimulation/methods, Auditory Perception, Hearing Tests, Humans, Young Adult
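As a rough sketch of the stimulus manipulation described above, the Python code below makes an octave-band noise centred at 4000 Hz and imposes an interaural time difference by delaying one ear. Plain Gaussian noise stands in for the study's "rustle" stimuli, and the Butterworth filter and integer-sample delay are simplifying assumptions rather than the study's actual signal processing.

import numpy as np
from scipy.signal import butter, sosfiltfilt

def octave_band_noise(fs, dur, fc, rng):
    """Octave-band Gaussian noise centred at fc (band edges fc/sqrt(2) to fc*sqrt(2))."""
    noise = rng.standard_normal(int(dur * fs))
    sos = butter(4, [fc / np.sqrt(2), fc * np.sqrt(2)], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, noise)

def apply_itd(mono, fs, itd_s):
    """Return (left, right) with the right channel delayed by itd_s seconds.
    Integer-sample delay for simplicity (an assumption; sub-sample delays are
    usually applied in the frequency domain)."""
    d = int(round(itd_s * fs))
    return np.pad(mono, (0, d)), np.pad(mono, (d, 0))

fs = 44100
rng = np.random.default_rng(0)
band_noise = octave_band_noise(fs, dur=0.4, fc=4000, rng=rng)
left, right = apply_itd(band_noise, fs, itd_s=300e-6)   # 300-microsecond ITD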
3.
J Acoust Soc Am; 150(2): 1311, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34470281

ABSTRACT

Previous studies have shown that for high-rate click trains and low-frequency pure tones, interaural time differences (ITDs) at the onset of the stimulus contribute most strongly to the overall lateralization percept (receive the largest perceptual weight). Previous studies have also shown that when these stimuli are modulated, ITDs during the rising portion of the modulation cycle receive increased perceptual weight. Baltzell, Cho, Swaminathan, and Best [(2020). J. Acoust. Soc. Am. 147, 3883-3894] measured perceptual weights for a pair of spoken words ("two" and "eight"), and found that word-initial phonemes receive larger weight than word-final phonemes, suggesting a "word-onset dominance" for speech. Generalizability of this conclusion was limited by a coarse temporal resolution and a limited stimulus set. In the present study, temporal weighting functions (TWFs) were measured for four spoken words ("two," "eight," "six," and "nine"). Stimuli were partitioned into 30-ms bins, ITDs were applied independently to each bin, and lateralization judgments were obtained. TWFs were derived using a hierarchical regression model. Results suggest that "word-initial" onset dominance does not generalize across words and that TWFs depend in part on acoustic changes throughout the stimulus. Two model-based predictions were generated to account for the observed TWFs, but neither could fully account for the perceptual data.


Subjects
Sound Localization, Acoustic Stimulation, Judgment, Speech
4.
J Acoust Soc Am; 150(2): 1076, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34470293

ABSTRACT

This study aimed at predicting individual differences in speech reception thresholds (SRTs) in the presence of symmetrically placed competing talkers for young listeners with sensorineural hearing loss. An existing binaural model incorporating the individual audiogram was revised to handle severe hearing losses by (a) taking as input the target speech level at SRT in a given condition and (b) introducing a floor in the model to limit extreme negative better-ear signal-to-noise ratios. The floor value was first set using SRTs measured with stationary and modulated noises. The model was then used to account for individual variations in SRTs found in two previously published data sets that used speech maskers. The model accounted well for the variation in SRTs across listeners with hearing loss, based solely on differences in audibility. When considering listeners with normal hearing, the model could predict the best SRTs, but not the poorer SRTs, suggesting that other factors limit performance when audibility (as measured with the audiogram) is not compromised.


Subjects
Speech Intelligibility, Speech Perception, Auditory Threshold, Individuality, Noise/adverse effects, Speech Reception Threshold Test
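The central model revision above is a floor on extreme negative better-ear signal-to-noise ratios. A minimal Python sketch of that idea follows; the per-band better-ear selection and the -20 dB floor are illustrative assumptions, not the structure or fitted value of the published model.

import numpy as np

def better_ear_snr(snr_left_db, snr_right_db, floor_db=-20.0):
    """Per-band better-ear SNR with a floor on extreme negative values.

    snr_left_db, snr_right_db: arrays of per-band SNRs in dB.
    floor_db: lower limit on the usable SNR (the -20 dB value is an
    illustrative assumption, not the value fitted in the study).
    """
    best = np.maximum(snr_left_db, snr_right_db)   # better ear, band by band
    return np.maximum(best, floor_db)              # apply the floor

# Example: a condition in which some bands have extremely poor SNR.
snr_l = np.array([-35.0, -12.0, 3.0, 8.0])
snr_r = np.array([-30.0, -15.0, 1.0, 10.0])
print(better_ear_snr(snr_l, snr_r))   # [-20. -12.   3.  10.]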
5.
J Acoust Soc Am; 147(6): 3883, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32611137

ABSTRACT

Numerous studies have demonstrated that the perceptual weighting of interaural time differences (ITDs) is non-uniform in time and frequency, leading to reports of spectral and temporal "dominance" regions. It is unclear, however, how these dominance regions apply to spectro-temporally complex stimuli such as speech. The authors report spectro-temporal weighting functions for ITDs in a pair of naturally spoken speech tokens ("two" and "eight"). Each speech token was composed of two phonemes and was partitioned into eight frequency regions over two time bins (one time bin for each phoneme). To derive lateralization weights, ITDs for each time-frequency bin were drawn independently from a normal distribution with a mean of 0 and a standard deviation of 200 µs, and listeners were asked to indicate whether the speech token was presented from the left or right. ITD thresholds were also obtained for each of the 16 time-frequency bins in isolation. The results suggest that spectral dominance regions apply to speech, and that ITDs carried by phonemes in the first position of the syllable contribute more strongly to lateralization judgments than ITDs carried by phonemes in the second position. The results also show that lateralization judgments are partially accounted for by ITD sensitivity across time-frequency bins.


Subjects
Sound Localization, Speech Perception, Acoustic Stimulation, Speech
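The weighting analysis described above amounts to regressing trial-by-trial left/right responses onto the per-bin ITDs. The Python sketch below illustrates the idea with simulated data and an ordinary logistic regression; the simulated listener, the trial count, and the use of a plain (non-hierarchical) logistic fit in place of the study's model are all assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_trials, n_bins = 2000, 16                  # 8 frequency regions x 2 time bins
itds = rng.normal(0.0, 200e-6, size=(n_trials, n_bins))   # per-bin ITD in seconds

# Simulated listener: ITDs in the first time bin are weighted twice as heavily.
true_w = np.concatenate([np.full(8, 2.0), np.full(8, 1.0)]) * 1e3
p_right = 1.0 / (1.0 + np.exp(-(itds @ true_w)))
responses = rng.random(n_trials) < p_right   # True = "heard on the right"

# Nearly unregularized logistic regression as a stand-in for the hierarchical model.
model = LogisticRegression(C=1e6, max_iter=1000).fit(itds * 1e6, responses)
weights = model.coef_.ravel()
relative_weights = weights / weights.sum()   # perceptual weight per time-frequency bin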
6.
J Acoust Soc Am; 147(3): 1546, 2020 Mar.
Article in English | MEDLINE | ID: mdl-32237845

ABSTRACT

Listeners with sensorineural hearing loss routinely experience less spatial release from masking (SRM) in speech mixtures than listeners with normal hearing. Hearing-impaired listeners have also been shown to have degraded temporal fine structure (TFS) sensitivity, a consequence of which is degraded access to interaural time differences (ITDs) contained in the TFS. Since these "binaural TFS" cues are critical for spatial hearing, it has been hypothesized that degraded binaural TFS sensitivity accounts for the limited SRM experienced by hearing-impaired listeners. In this study, speech stimuli were noise-vocoded using carriers that were systematically decorrelated across the left and right ears, thus simulating degraded binaural TFS sensitivity. Both (1) ITD sensitivity in quiet and (2) SRM in speech mixtures spatialized using ITDs (or binaural release from masking; BRM) were measured as a function of TFS interaural decorrelation in young normal-hearing and hearing-impaired listeners. This allowed for the examination of the relationship between ITD sensitivity and BRM over a wide range of ITD thresholds. It was found that, for a given ITD sensitivity, hearing-impaired listeners experienced less BRM than normal-hearing listeners, suggesting that binaural TFS sensitivity can account for only a modest portion of the BRM deficit in hearing-impaired listeners. However, substantial individual variability was observed.


Subjects
Deafness, Sensorineural Hearing Loss, Hearing Loss, Speech Perception, Auditory Threshold, Hearing, Sensorineural Hearing Loss/diagnosis, Humans, Noise/adverse effects, Perceptual Masking, Speech
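Systematic interaural decorrelation of noise carriers, as described above, is often produced by mixing a shared noise with an independent noise. The Python sketch below shows that standard construction for a target correlation rho; it is a generic illustration, not a reproduction of the study's vocoder processing.

import numpy as np

def decorrelated_noise_pair(n_samples, rho, rng):
    """Two Gaussian noise carriers with interaural correlation approximately rho."""
    shared = rng.standard_normal(n_samples)
    independent = rng.standard_normal(n_samples)
    left = shared
    right = rho * shared + np.sqrt(1.0 - rho ** 2) * independent
    return left, right

rng = np.random.default_rng(0)
left, right = decorrelated_noise_pair(44100, rho=0.8, rng=rng)
print(np.corrcoef(left, right)[0, 1])   # close to 0.8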
7.
Neuroimage; 200: 490-500, 2019 Oct 15.
Article in English | MEDLINE | ID: mdl-31254649

ABSTRACT

Natural speech is organized according to a hierarchical structure, with individual speech sounds combining to form abstract linguistic units, and abstract linguistic units combining to form higher-order linguistic units. Since the boundaries between these units are not always indicated by acoustic cues, they must often be computed internally. Signatures of this internal computation were reported by Ding et al. (2016), who presented isochronous sequences of mono-syllabic words that combined to form phrases that combined to form sentences, and showed that cortical responses simultaneously encode boundaries at multiple levels of the linguistic hierarchy. In the present study, we designed melodic sequences that were hierarchically organized according to Western music conventions. Specifically, isochronous sequences of "sung" nonsense syllables were constructed such that syllables combined to form triads outlining individual chords, which combined to form harmonic progressions. EEG recordings were made while participants listened to these sequences with the instruction to detect when violations in the sequence structure occurred. We show that cortical responses simultaneously encode boundaries at multiple levels of a melodic hierarchy, suggesting that the encoding of hierarchical structure is not unique to speech. No effect of musical training on cortical encoding was observed.


Subjects
Auditory Perception/physiology, Cerebral Cortex/physiology, Functional Neuroimaging, Music, Adolescent, Adult, Electroencephalography, Female, Humans, Male, Middle Aged, Speech Perception/physiology, Young Adult
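Isochronous designs like the one above are commonly analysed by frequency tagging: if syllables, triads, and progressions occur at fixed rates, phase-locked cortical activity should produce spectral peaks at each rate. The Python sketch below illustrates that analysis on simulated data; the specific rates (4 Hz syllables, 4/3 Hz triads, 1/3 Hz progressions) and the synthetic EEG are assumptions, not the study's parameters.

import numpy as np

def evoked_spectrum(eeg_trials, fs):
    """Amplitude spectrum of the trial-averaged EEG from one channel.

    Averaging across trials before the FFT retains only activity that is
    phase-locked to the sequence, which is what the tagged peaks reflect.
    """
    evoked = eeg_trials.mean(axis=0)
    spectrum = np.abs(np.fft.rfft(evoked)) / evoked.size
    freqs = np.fft.rfftfreq(evoked.size, d=1.0 / fs)
    return freqs, spectrum

# Simulated data with responses at the three assumed rates plus noise.
syllable_rate, triad_rate, progression_rate = 4.0, 4.0 / 3.0, 1.0 / 3.0
fs, dur, n_trials = 250, 12.0, 30
t = np.arange(int(dur * fs)) / fs
rng = np.random.default_rng(2)
signal = (0.5 * np.cos(2 * np.pi * syllable_rate * t)
          + 0.3 * np.cos(2 * np.pi * triad_rate * t)
          + 0.2 * np.cos(2 * np.pi * progression_rate * t))
trials = signal + rng.standard_normal((n_trials, t.size))
freqs, spec = evoked_spectrum(trials, fs)   # peaks appear at the three tagged rates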
8.
J Acoust Soc Am; 144(5): 2662, 2018 Nov.
Article in English | MEDLINE | ID: mdl-30522300

ABSTRACT

While wide dynamic range compression (WDRC) is a standard feature of modern hearing aids, it can be difficult to fit compression settings to individual hearing aid users. The goal of the current study was to develop a practical test to learn the preference of individual listeners for different compression ratio (CR) settings in different listening conditions (speech-in-quiet and speech-in-noise). While it is possible to exhaustively test different CR settings, such methods can take many hours to complete, making them impractical. Bayesian optimization methods were used to find CR preferences in individual listeners in a relatively short amount of time. Using this practical preference learning test, individual differences in CR preference were examined across a relatively wide range of CR settings in different listening conditions. In experiment 1, the accuracy of the preference learning test in normal-hearing listeners was verified. In experiment 2, it was shown that individual hearing-impaired listeners differed in their CR preferences, and listeners tended to prefer the CR setting identified by the preference learning test over both linear gain and the National Acoustics Lab-Nonlinear 2 (NAL-NL2) CR prescription based on their audiograms.


Subjects
Auditory Perception/physiology, Hearing Aids/trends, Noise/adverse effects, Patient Preference/statistics & numerical data, Acoustic Stimulation/methods, Adult, Aged, Algorithms, Bayes Theorem, Data Compression, Female, Fourier Analysis, Hearing Tests/methods, Humans, Individuality, Male, Middle Aged, Persons With Hearing Impairments/rehabilitation, Persons With Hearing Impairments/statistics & numerical data
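The abstract above names Bayesian optimization as the engine of the preference test. The Python sketch below shows one generic form of that approach: a Gaussian-process surrogate over the compression-ratio axis, sampled where an upper-confidence-bound rule is highest. The kernel, acquisition rule, simulated listener, and trial budget are all assumptions; the study's actual procedure is not reproduced here.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def simulated_preference(cr):
    """Stand-in listener whose preference peaks near CR = 2 (purely an assumption)."""
    noise = np.random.default_rng(int(cr * 100)).standard_normal()
    return -(cr - 2.0) ** 2 + 0.2 * noise

cr_grid = np.linspace(1.0, 4.0, 61).reshape(-1, 1)        # candidate compression ratios
tested_cr = [1.0, 4.0]                                     # start at the extremes
scores = [simulated_preference(c) for c in tested_cr]

gp = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(0.1), normalize_y=True)
for _ in range(10):                                        # small trial budget
    gp.fit(np.array(tested_cr).reshape(-1, 1), scores)
    mean, std = gp.predict(cr_grid, return_std=True)
    next_cr = float(cr_grid[np.argmax(mean + 1.5 * std)])  # upper confidence bound
    tested_cr.append(next_cr)
    scores.append(simulated_preference(next_cr))

preferred_cr = tested_cr[int(np.argmax(scores))]           # estimated preferred CR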
9.
J Neurophysiol; 118(6): 3144-3151, 2017 Dec 01.
Article in English | MEDLINE | ID: mdl-28877963

ABSTRACT

It has been suggested that cortical entrainment plays an important role in speech perception by helping to parse the acoustic stimulus into discrete linguistic units. However, the question of whether the entrainment response to speech depends on the intelligibility of the stimulus remains open. Studies addressing this question of intelligibility have, for the most part, significantly distorted the acoustic properties of the stimulus to degrade the intelligibility of the speech stimulus, making it difficult to compare across "intelligible" and "unintelligible" conditions. To avoid these acoustic confounds, we used priming to manipulate the intelligibility of vocoded speech. We used EEG to measure the entrainment response to vocoded target sentences that are preceded by natural speech (nonvocoded) prime sentences that are either valid (match the target) or invalid (do not match the target). For unintelligible speech, valid primes have the effect of restoring intelligibility. We compared the effect of priming on the entrainment response for both 3-channel (unintelligible) and 16-channel (intelligible) speech. We observed a main effect of priming, suggesting that the entrainment response depends on prior knowledge, but not a main effect of vocoding (16 channels vs. 3 channels). Furthermore, we found no difference in the effect of priming on the entrainment response to 3-channel and 16-channel vocoded speech, suggesting that for vocoded speech, the entrainment response does not depend on intelligibility. NEW & NOTEWORTHY: Neural oscillations have been implicated in the parsing of speech into discrete, hierarchically organized units. Our data suggest that these oscillations track the acoustic envelope rather than more abstract linguistic properties of the speech stimulus. Our data also suggest that prior experience with the stimulus allows these oscillations to better track the stimulus envelope.


Subjects
Cerebral Cortex/physiology, Repetition Priming, Speech Intelligibility, Speech Perception, Adult, Female, Humans, Male
10.
Brain Res; 1644: 203-12, 2016 Aug 01.
Article in English | MEDLINE | ID: mdl-27195825

ABSTRACT

Recent studies have uncovered a neural response that appears to track the envelope of speech, and have shown that this tracking process is mediated by attention. It has been argued that this tracking reflects a process of phase-locking to the fluctuations of stimulus energy, ensuring that this energy arrives during periods of high neuronal excitability. Because all acoustic stimuli are decomposed into spectral channels at the cochlea, and this spectral decomposition is maintained along the ascending auditory pathway and into auditory cortex, we hypothesized that the overall stimulus envelope is not as relevant to cortical processing as the individual frequency channels; attention may be mediating envelope tracking differentially across these spectral channels. To test this, we reanalyzed data reported by Horton et al. (2013), where high-density EEG was recorded while adults attended to one of two competing naturalistic speech streams. In order to simulate cochlear filtering, the stimuli were passed through a gammatone filterbank, and temporal envelopes were extracted at each filter output. Following Horton et al. (2013), the attended and unattended envelopes were cross-correlated with the EEG, and local maxima were extracted at three different latency ranges corresponding to distinct peaks in the cross-correlation function (N1, P2, and N2). We found that the ratio between the attended and unattended cross-correlation functions varied across frequency channels in the N1 latency range, consistent with the hypothesis that attention differentially modulates envelope-tracking activity across spectral channels.


Subjects
Attention/physiology, Cerebral Cortex/physiology, Speech Perception/physiology, Acoustic Stimulation, Adult, Auditory Pathways/physiology, Electroencephalography, Evoked Potentials, Auditory Evoked Potentials, Female, Humans, Male, Computer-Assisted Signal Processing, Speech Acoustics, Young Adult
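The analysis described above (gammatone filtering, per-channel envelope extraction, and cross-correlation with the EEG) can be sketched compactly in Python. The version below uses a textbook fourth-order gammatone impulse response, Hilbert envelopes, and a single illustrative lag in the N1 range; the centre frequencies, the 110-ms lag, and the placeholder noise data are assumptions rather than the study's settings.

import numpy as np
from scipy.signal import fftconvolve, hilbert, resample

def gammatone_ir(fc, fs, dur=0.064, order=4):
    """Fourth-order gammatone impulse response at centre frequency fc (Hz)."""
    t = np.arange(int(dur * fs)) / fs
    erb = 24.7 * (4.37 * fc / 1000.0 + 1.0)        # Glasberg & Moore ERB
    b = 1.019 * erb
    ir = t ** (order - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc * t)
    return ir / np.sqrt(np.sum(ir ** 2))

def channel_envelopes(stimulus, fs_audio, fs_eeg, centre_freqs):
    """Gammatone-filter the stimulus, take Hilbert envelopes, resample to the EEG rate."""
    envs = []
    for fc in centre_freqs:
        band = fftconvolve(stimulus, gammatone_ir(fc, fs_audio), mode="same")
        envs.append(resample(np.abs(hilbert(band)), int(len(band) * fs_eeg / fs_audio)))
    return np.array(envs)

def corr_at_lag(eeg, env, fs_eeg, lag_s):
    """Correlation between the EEG and one channel envelope at a single lag (seconds)."""
    k = int(round(lag_s * fs_eeg))
    shifted = eeg[k:]
    return np.corrcoef(shifted, env[:shifted.size])[0, 1]

# Placeholder data: noise stands in for the speech stimulus and for the EEG channel.
fs_audio, fs_eeg = 16000, 250
rng = np.random.default_rng(3)
stimulus = rng.standard_normal(4 * fs_audio)
eeg = rng.standard_normal(4 * fs_eeg)
envs = channel_envelopes(stimulus, fs_audio, fs_eeg, centre_freqs=[250, 500, 1000, 2000, 4000])
n1_corr = [corr_at_lag(eeg, env, fs_eeg, lag_s=0.11) for env in envs]   # one value per channel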
11.
Am J Audiol; 25(1): 75-83, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26989823

ABSTRACT

PURPOSE: Understanding speech in background noise is difficult for many individuals; however, time constraints have limited its inclusion in the clinical audiology assessment battery. Phoneme scoring of words has been suggested as a method of reducing test time and variability. The purposes of this study were to establish a phoneme scoring rubric and use it in testing phoneme and word perception in noise in older individuals and individuals with hearing impairment. METHOD: Words were presented to 3 participant groups at 80 dB in speech-shaped noise at 7 signal-to-noise ratios (-10 to 35 dB). Responses were scored for words and phonemes correct. RESULTS: As expected, phoneme scores were up to about 30% better than word scores. Word scoring resulted in larger hearing loss effect sizes than phoneme scoring, whereas scoring method did not significantly modify age effect sizes. There were significant effects of hearing loss and some limited effects of age; age effect sizes of about 3 dB and hearing loss effect sizes of more than 10 dB were found. CONCLUSION: Hearing loss is the major factor affecting word and phoneme recognition, with a subtle contribution from age. Phoneme scoring may provide several advantages over word scoring. A set of recommended phoneme scoring guidelines is provided.


Subjects
Speech Audiometry/methods, Hearing Loss/physiopathology, Speech Perception, Adolescent, Adult, Aged, Aged 80 and over, Pure-Tone Audiometry, Auditory Threshold, Case-Control Studies, Female, Hearing Loss/diagnosis, Humans, Male, Middle Aged, Phonetics, Signal-to-Noise Ratio, Young Adult
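The contrast between word and phoneme scoring above is easy to make concrete: a word counts as correct only if every phoneme is reported correctly, whereas phoneme scoring gives partial credit. The Python sketch below uses a naive position-by-position comparison as a stand-in; the study's actual scoring rubric, which handles real response alignment, is not reproduced here.

def score_response(target_phonemes, response_phonemes):
    """Whole-word and phoneme scores for one trial.

    Naive position-by-position comparison (an assumption); a full rubric
    would align phonemes to handle insertions and deletions.
    """
    matches = sum(t == r for t, r in zip(target_phonemes, response_phonemes))
    word_correct = int(matches == len(target_phonemes)
                       and len(target_phonemes) == len(response_phonemes))
    phoneme_proportion = matches / len(target_phonemes)
    return word_correct, phoneme_proportion

# "ship" /SH IH P/ heard as "chip" /CH IH P/: word wrong, 2 of 3 phonemes correct.
print(score_response(["SH", "IH", "P"], ["CH", "IH", "P"]))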
12.
Clin Neurophysiol; 126(7): 1319-30, 2015 Jul.
Article in English | MEDLINE | ID: mdl-25453611

ABSTRACT

OBJECTIVE: To use cortical auditory evoked potentials (CAEPs) to understand neural encoding in background noise and the conditions under which noise enhances CAEP responses. METHODS: CAEPs from 16 normal-hearing listeners were recorded using the speech syllable /ba/ presented in quiet and speech-shaped noise at signal-to-noise ratios of 10 and 30 dB. The syllable was presented binaurally and monaurally at two presentation rates. RESULTS: The amplitudes of N1 and N2 peaks were often significantly enhanced in the presence of low-level background noise relative to quiet conditions, while P1 and P2 amplitudes were consistently reduced in noise. P1 and P2 amplitudes were significantly larger during binaural compared to monaural presentations, while N1 and N2 peaks were similar between binaural and monaural conditions. CONCLUSIONS: Methodological choices impact CAEP peaks in very different ways. Negative peaks can be enhanced by background noise in certain conditions, while positive peaks are generally enhanced by binaural presentations. SIGNIFICANCE: Methodological choices significantly impact CAEPs acquired in quiet and in noise. If CAEPs are to be used as a tool to explore signal encoding in noise, scientists must be cognizant of how differences in acquisition and processing protocols selectively shape CAEP responses.


Subjects
Acoustic Stimulation/methods, Auditory Cortex/physiology, Auditory Perception/physiology, Auditory Evoked Potentials/physiology, Noise, Adolescent, Adult, Evoked Potentials/physiology, Female, Humans, Male, Signal-to-Noise Ratio, Speech, Time Factors, Young Adult
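CAEP peak amplitudes of the kind reported above are typically read off the averaged waveform within conventional latency windows (maxima for P1 and P2, minima for N1 and N2). The Python sketch below illustrates that step; the window boundaries and the synthetic averaged response are illustrative assumptions, not the study's analysis parameters.

import numpy as np

# Illustrative latency windows in seconds (assumptions, not the study's values).
WINDOWS = {"P1": (0.040, 0.080, +1), "N1": (0.080, 0.150, -1),
           "P2": (0.150, 0.250, +1), "N2": (0.180, 0.300, -1)}

def caep_peaks(evoked, fs, t0=0.0):
    """Peak amplitude of an averaged CAEP within each latency window.

    evoked: trial-averaged waveform (1-D); fs: sampling rate (Hz);
    t0: time of the first sample relative to stimulus onset (s).
    """
    t = t0 + np.arange(evoked.size) / fs
    peaks = {}
    for name, (lo, hi, sign) in WINDOWS.items():
        segment = evoked[(t >= lo) & (t <= hi)]
        peaks[name] = segment.max() if sign > 0 else segment.min()
    return peaks

# Example with a synthetic averaged response containing four bumps/troughs.
fs = 1000
t = np.arange(0, 0.5, 1 / fs)
evoked = (1.0 * np.exp(-((t - 0.06) / 0.01) ** 2)     # P1-like positivity
          - 3.0 * np.exp(-((t - 0.10) / 0.02) ** 2)   # N1-like negativity
          + 2.0 * np.exp(-((t - 0.18) / 0.03) ** 2)   # P2-like positivity
          - 1.5 * np.exp(-((t - 0.25) / 0.03) ** 2))  # N2-like negativity
print(caep_peaks(evoked, fs))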
13.
Clin Neurophysiol; 125(2): 370-80, 2014 Feb.
Article in English | MEDLINE | ID: mdl-24007688

ABSTRACT

OBJECTIVE: The purpose of this study was to determine the effects of signal-to-noise ratio (SNR) and signal level on the offset response of the cortical auditory evoked potential (CAEP). Successful listening often depends on how well the auditory system can extract target signals from competing background noise. Both signal onsets and offsets are encoded neurally and contribute to successful listening in noise. Neural onset responses to signals in noise demonstrate a strong sensitivity to SNR rather than signal level; however, the sensitivity of neural offset responses to these cues is not known. METHODS: We analyzed the offset response from two previously published datasets for which only the onset response was reported. For both datasets, CAEPs were recorded from young normal-hearing adults in response to a 1000-Hz tone. For the first dataset, tones were presented at seven different signal levels without background noise, while the second dataset varied both signal level and SNR. RESULTS: Offset responses demonstrated sensitivity to absolute signal level in quiet, to SNR, and to absolute signal level in noise. CONCLUSIONS: Offset sensitivity to signal level when presented in noise contrasts with previously published onset results. SIGNIFICANCE: This sensitivity suggests a potential clinical measure of cortical encoding of signal level in noise.


Subjects
Auditory Cortex/physiology, Auditory Perception/physiology, Auditory Evoked Potentials/physiology, Acoustic Stimulation/methods, Adult, Humans, Noise, Signal-to-Noise Ratio
14.
Int J Otolaryngol; 2012: 365752, 2012.
Article in English | MEDLINE | ID: mdl-23093964

ABSTRACT

The clinical usefulness of aided cortical auditory evoked potentials (CAEPs) remains unclear despite several decades of research. One major contributor to this ambiguity is the wide range of variability across published studies and across individuals within a given study; some results demonstrate expected amplification effects, while others demonstrate limited or no amplification effects. Recent evidence indicates that some of the variability in amplification effects may be explained by distinguishing between experiments that focused on physiological detection of a stimulus versus those that differentiate responses to two audible signals, or physiological discrimination. Herein, we ask if either of these approaches is clinically feasible given the inherent challenges with aided CAEPs. N1 and P2 waves were elicited from 12 noise-masked normal-hearing individuals using hearing-aid-processed 1000-Hz pure tones. Stimulus levels were varied to study the effect of hearing-aid-signal/hearing-aid-noise audibility relative to the noise-masked thresholds. Results demonstrate that clinical use of aided CAEPs may be justified when determining whether audible stimuli are physiologically detectable relative to inaudible signals. However, differentiating aided CAEPs elicited from two suprathreshold stimuli (i.e., physiological discrimination) is problematic and should not be used for clinical decision making until a better understanding of the interaction between hearing-aid-processed stimuli and CAEPs can be established.
